
Convergence of Random Variables

In analysis courses we learn about the convergence of sequences, the convergence of series, and so on. Probability, as a branch of mathematics, is also a branch of analysis, so we can likewise talk about the convergence of random variables.

Convergence in Probability

Let $X_1, X_2, \ldots$ be an infinite sequence of random variables and let $Y$ be another random variable. Then the sequence $\{X_n\}$ converges in probability to $Y$ if $\forall \epsilon > 0, \lim\limits_{n\to\infty} P(|X_n - Y| > \epsilon) = 0$. We denote it as $X_n \overset{P}{\to} Y$.
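
A minimal Monte Carlo sketch of this definition (assuming NumPy; the setup $X_n = Y + \text{noise of variance } 1/n$ is an illustrative choice, not from the notes): estimate $P(|X_n - Y| > \epsilon)$ by simulation and watch it shrink.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, reps = 0.1, 100_000

# Illustrative setup: X_n = Y + noise with variance 1/n, so X_n ->P Y.
Y = rng.standard_normal(reps)
for n in [1, 10, 100, 1000]:
    X_n = Y + rng.standard_normal(reps) / np.sqrt(n)
    prob = np.mean(np.abs(X_n - Y) > eps)  # Monte Carlo estimate of P(|X_n - Y| > eps)
    print(f"n={n:5d}  P(|X_n - Y| > {eps}) ≈ {prob:.4f}")
```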

Weak Law of Large Numbers: Let $X_1, X_2, \ldots$ be a sequence of independent random variables, each having the same mean $\mu$ and variance $\sigma_i^2 \le \nu$ where $\nu < \infty$. Then $\forall \epsilon > 0, \lim_{n\to\infty} P(|M_n - \mu| > \epsilon) = 0$, i.e., $M_n \overset{P}{\to} \mu$, where $M_n = \frac{1}{n}\sum_{i=1}^n X_i$.

  • Can be proved by Chebyshev's inequality: $P(|M_n - \mu| \ge \epsilon) \le \frac{Var[M_n]}{\epsilon^2} \le \frac{\nu}{n\epsilon^2}$ and $\lim\limits_{n\to\infty} \nu/(n\epsilon^2) = 0$,
  • where $E[M_n] = \mu$ and $Var[M_n] = \frac{1}{n^2}\sum_i Var[X_i] \le \frac{n\nu}{n^2} = \frac{\nu}{n}$.
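
The theorem and the Chebyshev bound above can be checked numerically. A minimal sketch, assuming NumPy and Uniform(0,1) draws as an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, nu, eps = 0.5, 1.0 / 12.0, 0.05   # Uniform(0,1): mean 1/2, variance 1/12
reps = 10_000                          # independent realizations of M_n

for n in [10, 100, 1000]:
    M_n = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)
    est = np.mean(np.abs(M_n - mu) > eps)   # Monte Carlo P(|M_n - mu| > eps)
    bound = nu / (n * eps**2)               # Chebyshev bound nu / (n eps^2)
    print(f"n={n:5d}  P≈{est:.4f}  Chebyshev bound={bound:.4f}")
```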

Almost Sure Convergence

Let $X_1, X_2, \ldots$ be a sequence of random variables and let $X$ be another random variable. The sequence $\{X_n\}$ converges almost surely to $X$ if $P(\{s : \lim_{n\to\infty} X_n(s) = X(s)\}) = 1$. We denote it as $X_n \overset{\text{a.s.}}{\to} X$. Sometimes we also call it convergence with probability 1.

  • $P(\lim\limits_{n\to\infty} X_n = X) = 1 \implies \forall \epsilon > 0, P(|X_n - X| > \epsilon \text{ i.o.}) = 0$
  • $X_n \overset{\text{a.s.}}{\to} X \implies X_n \overset{P}{\to} X$ (the converse is false)
  • Almost sure convergence does not imply $\lim\limits_{n\to\infty} E[X_n] = E[X]$; it only gives $E[\lim\limits_{n\to\infty} X_n] = E[X]$.

E.g. $S = [0,1]$, $U \sim \text{Uniform}(0,1)$; let $X_n(s) = I(U(s) > \frac{1}{n^2})$. Then $X_n \overset{\text{a.s.}}{\to} 1$ by the Borel–Cantelli lemma.

  • Borel–Cantelli lemma (sufficient condition): if $\forall \epsilon > 0, \sum_{n=1}^{\infty} P(|X_n - X| > \epsilon) < \infty$, then $X_n \overset{\text{a.s.}}{\to} X$. Take $X = 1$ and $0 < \epsilon < 1$. Then $\sum_{n=1}^{\infty} P(|X_n - 1| > \epsilon) = \sum_{n=1}^{\infty} \left[ P(X_n - 1 > \epsilon) + P(1 - X_n > \epsilon) \right] = \sum_{n=1}^{\infty} P(1 - X_n > \epsilon)$ since $X_n \le 1$, and this equals $\sum_{n=1}^{\infty} P(X_n = 0) = \sum_{n=1}^{\infty} P(U \le \frac{1}{n^2}) = \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} < \infty$, hence $X_n \overset{\text{a.s.}}{\to} 1$ by the Borel–Cantelli lemma.
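
A quick simulation sketch of this example (assuming NumPy): one draw of $U$ determines the whole path $X_1, X_2, \ldots$, which is 0 only for the finitely many $n$ with $n \le 1/\sqrt{U}$ and then stays at 1 forever — exactly almost sure convergence.

```python
import numpy as np

rng = np.random.default_rng(2)

# One draw of U fixes the whole path X_n = 1{U > 1/n^2}; the path is 0 only
# while n <= 1/sqrt(U) and equals 1 forever after -> almost sure convergence.
for _ in range(5):
    U = rng.uniform()
    last_zero = int(np.floor(1.0 / np.sqrt(U)))   # largest n with U <= 1/n^2
    X = (U > 1.0 / np.arange(1, 11) ** 2).astype(int)
    print(f"U={U:.4f}  X_1..X_10={X}  path is 1 for every n > {last_zero}")
```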

Strong Law of Large Numbers: Let $X_1, X_2, \ldots$ be a sequence of independent random variables, each with the same mean $\mu$ and variance bounded by $\nu < \infty$. Then $P(\lim_{n\to\infty} M_n = \mu) = 1$, i.e., $M_n \overset{\text{a.s.}}{\to} \mu$.
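
Contrast with the weak law: the strong law is a statement about individual sample paths. A minimal sketch, assuming NumPy and Uniform(0,1) draws as an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)

# A single sample path of running means: under the strong law this one path
# converges to mu = 0.5, not merely "with high probability across paths".
x = rng.uniform(size=100_000)
M = np.cumsum(x) / np.arange(1, x.size + 1)
for n in [10, 100, 1_000, 100_000]:
    print(f"n={n:6d}  M_n={M[n - 1]:.4f}")
```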

BOUNDED CONVERGENCE THEOREM: If $X_n \overset{\text{a.s.}}{\to} X$ and the $X_n$ are uniformly bounded ($\exists M < \infty, |X_n| \le M, \forall n$), then $\lim\limits_{n\to\infty} E[X_n] = E[X]$.

MONOTONE CONVERGENCE THEOREM: $X_n \overset{\text{a.s.}}{\to} X$ and $0 \le X_1 \le X_2 \le \ldots \implies \lim\limits_{n\to\infty} E[X_n] = E[X]$.

DOMINATED CONVERGENCE THEOREM: $X_n \overset{\text{a.s.}}{\to} X$ and there is another random variable $Y$ with $E[|Y|] < \infty$ and $|X_n| \le Y, \forall n \implies \lim\limits_{n\to\infty} E[X_n] = E[X]$.
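
These three theorems govern when expectations pass through almost sure limits. A minimal numerical sketch (assuming NumPy; both sequences are standard textbook choices, not from the notes): $X_n = U^n$ is dominated by $Y = 1$ and its expectation converges to $E[X] = 0$, while $X_n = n \cdot I(U < 1/n)$ also converges a.s. to 0 but admits no integrable dominating $Y$, and its expectation stays at 1 — the failure mode flagged in the bullet above about $\lim_n E[X_n] \ne E[X]$.

```python
import numpy as np

rng = np.random.default_rng(4)
U = rng.uniform(size=1_000_000)

for n in [1, 10, 100, 1000]:
    # Dominated: X_n = U^n <= 1 and X_n -> 0 a.s., so E[X_n] = 1/(n+1) -> 0 = E[X].
    dominated = np.mean(U**n)
    # No dominating Y: X_n = n * 1{U < 1/n} -> 0 a.s., yet E[X_n] = 1 for all n.
    undominated = np.mean(n * (U < 1.0 / n))
    print(f"n={n:5d}  E[U^n]≈{dominated:.4f}  E[n*1{{U<1/n}}]≈{undominated:.4f}")
```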

Convergence in Distribution

Let $X_1, X_2, \ldots$ be a sequence of random variables and let $X$ be another random variable. Then the sequence $\{X_n\}$ converges in distribution to $X$ if $\lim_{n\to\infty} P(X_n \le x) = P(X \le x)$ for every $x \in \mathbb{R}$ with $P(X = x) = 0$ (i.e., every continuity point of the CDF of $X$). We denote it as $X_n \overset{D}{\to} X$.
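
A minimal sketch of convergence in distribution (assuming NumPy; the example $n \min(U_1, \ldots, U_n) \overset{D}{\to} \text{Exp}(1)$ is a standard illustration, not from the notes): compare the empirical CDF of $X_n$ with the limit CDF at a few points.

```python
import numpy as np

rng = np.random.default_rng(5)
reps = 50_000

# Classic example: n * min(U_1, ..., U_n) ->D Exp(1), since
# P(n*min <= x) = 1 - (1 - x/n)^n -> 1 - e^{-x}.
for n in [2, 10, 100]:
    X_n = n * rng.uniform(size=(reps, n)).min(axis=1)
    for x in [0.5, 1.0, 2.0]:
        emp = np.mean(X_n <= x)       # empirical CDF of X_n at x
        lim = 1.0 - np.exp(-x)        # Exp(1) CDF, the limit law
        print(f"n={n:3d} x={x:.1f}  P(X_n<=x)≈{emp:.4f}  limit={lim:.4f}")
```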

  • $X_n \overset{P}{\to} X \implies X_n \overset{D}{\to} X$ (converse is false)
  • $X_n \overset{\text{a.s.}}{\to} X \implies X_n \overset{D}{\to} X$ (converse is false)

Central Limit Theorem (CLT): Let $X_1, X_2, \ldots$ be a sequence of i.i.d. random variables with mean $\mu$ and variance $\sigma^2$. Let $S_n = \sum_{i=1}^n X_i$ and $M_n = \bar{X} = S_n/n$, and define $Z_n = \frac{S_n - n\mu}{\sqrt{n}\sigma} = \frac{M_n - \mu}{\sigma/\sqrt{n}} = \sqrt{n}\,\frac{M_n - \mu}{\sigma}$. Then as $n \to \infty$, $Z_n \overset{D}{\to} Z$ where $Z \sim N(0,1)$ is standard normal.
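
A minimal simulation sketch of the CLT (assuming NumPy and Uniform(0,1) summands as an illustrative choice): standardize the sample mean and compare against standard normal quantities.

```python
import numpy as np

rng = np.random.default_rng(6)
reps, n = 50_000, 100
mu, sigma = 0.5, np.sqrt(1.0 / 12.0)    # Uniform(0,1) mean and standard deviation

# Z_n = sqrt(n) (M_n - mu) / sigma should look standard normal for large n.
M_n = rng.uniform(size=(reps, n)).mean(axis=1)
Z_n = np.sqrt(n) * (M_n - mu) / sigma
print(f"mean≈{Z_n.mean():.4f}  std≈{Z_n.std():.4f}")        # expect ≈ 0 and ≈ 1
for z in [-1.96, 0.0, 1.96]:
    print(f"P(Z_n <= {z:+.2f}) ≈ {np.mean(Z_n <= z):.4f}")  # Φ(z) ≈ 0.025, 0.5, 0.975
```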

SLUTSKY'S Lemma: Let $X_n$ and $Y_n$ be two sequences of random variables with $X_n \overset{D}{\to} X$ and $Y_n \overset{P}{\to} c$ for a constant $c$. Then:

  • $X_n + Y_n \overset{D}{\to} X + c$
    • If both sequences instead converge in probability ($X_n \overset{P}{\to} X$, $Y_n \overset{P}{\to} Y$), then $X_n + Y_n \overset{P}{\to} X + Y$ even when the limit $Y$ is not constant.
  • $X_n Y_n \overset{D}{\to} cX$
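
A small sketch of Slutsky's lemma in use (assuming NumPy; the studentized-mean setup is an illustrative choice, not from the notes): the CLT gives $\sqrt{n}(M_n - \mu)/\sigma \overset{D}{\to} N(0,1)$, the sample standard deviation satisfies $S_n \overset{P}{\to} \sigma$, so the product rule gives $\sqrt{n}(M_n - \mu)/S_n \overset{D}{\to} N(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(7)
reps, n, mu = 20_000, 200, 0.5

# Slutsky in action: sqrt(n)(M_n - mu)/sigma ->D N(0,1) by the CLT, and the
# sample standard deviation S_n ->P sigma, so the studentized ratio
# T_n = sqrt(n)(M_n - mu)/S_n ->D N(0,1) as well.
samples = rng.uniform(size=(reps, n))
M_n = samples.mean(axis=1)
S_n = samples.std(axis=1, ddof=1)
T_n = np.sqrt(n) * (M_n - mu) / S_n
print(f"mean≈{T_n.mean():.4f}  std≈{T_n.std():.4f}")     # expect ≈ 0 and ≈ 1
print(f"P(T_n <= 1.96) ≈ {np.mean(T_n <= 1.96):.4f}")    # expect ≈ 0.975
```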

Continuous Mapping Theorem: Let $X_n \to X$ (almost surely, in probability, or in distribution) and let $g$ be a continuous function. Then $g(X_n) \to g(X)$ in the same mode of convergence.
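
A minimal sketch of the continuous mapping theorem (assuming NumPy; $g(x) = e^x$ is an illustrative choice): since $M_n \overset{P}{\to} 1/2$ for Uniform(0,1) means and $g$ is continuous, $g(M_n) \overset{P}{\to} e^{1/2}$.

```python
import numpy as np

rng = np.random.default_rng(8)

# Continuous mapping: M_n ->P 1/2 for Uniform(0,1) means, and g(x) = e^x is
# continuous, so g(M_n) ->P g(1/2) = e^{1/2}.
for n in [10, 1_000, 100_000]:
    M_n = rng.uniform(size=n).mean()
    print(f"n={n:7d}  exp(M_n)={np.exp(M_n):.4f}  target={np.exp(0.5):.4f}")
```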